Pseudo-label self-training model for transfer learning algorithm

Authors

Abstract

When aligning joint distributions between domains, existing transfer learning algorithms usually assign pseudo labels due to the lack of labels in the target domain. However, pseudo-label noise degrades learning performance. A pseudo-label self-training for transfer learning (PST-TL) model is proposed to generate reliable pseudo labels in the target domain, and it can be embedded in a wide range of transfer learning algorithms. Pseudo labels are predicted by an ensemble classifier using an absolute majority vote, and only samples that win such a vote are considered to be of high confidence. The training applies a self-training strategy, adding strongly stable data to the training set of the classifier. Experiments on semi-supervised and unsupervised tasks show that existing algorithms improve significantly after being embedded in the PST-TL model.
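The pseudo-labelling step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of NumPy, and the strict majority threshold are all assumptions for illustration.

```python
# Hypothetical sketch of the absolute-majority-vote pseudo-labelling step:
# an ensemble of classifiers votes on each unlabelled target sample, and a
# sample is accepted as a high-confidence pseudo-labelled example only when
# a single class wins a strict absolute majority of the votes.
import numpy as np

def pseudo_label_by_majority(ensemble_preds, threshold=0.5):
    """Select high-confidence pseudo labels by absolute majority vote.

    ensemble_preds: (n_classifiers, n_samples) array of predicted class ids.
    Returns (indices, labels): the samples where one class receives strictly
    more than `threshold` of the ensemble's votes, and that winning class.
    """
    n_clf, n_samples = ensemble_preds.shape
    indices, labels = [], []
    for j in range(n_samples):
        classes, counts = np.unique(ensemble_preds[:, j], return_counts=True)
        top = counts.argmax()
        if counts[top] > threshold * n_clf:  # strict absolute majority
            indices.append(j)
            labels.append(classes[top])
    return np.array(indices), np.array(labels)

# Example: 3 classifiers voting on 3 target samples.
preds = np.array([[0, 1, 2],
                  [0, 1, 0],
                  [0, 2, 1]])
idx, lab = pseudo_label_by_majority(preds)
# Sample 0 is unanimous (class 0), sample 1 gets 2/3 votes for class 1,
# sample 2 has no majority and is rejected.
```

In a full self-training loop, the accepted samples would be added to the labelled training set and the ensemble retrained, repeating until no new stable samples are promoted.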


Similar articles

Domain Adaptation for Learning from Label Proportions Using Self-Training

Learning from Label Proportions (LLP) is a machine learning problem in which the training data consist of bags of instances, and only the class label distribution for each bag is known. In some domains label proportions are readily available; for example, by grouping social media users by location, one can use census statistics to build a classifier for user demographics. However, label proport...


Label Embedding for Transfer Learning

Automatically tagging textual mentions with the concepts, types and entities that they represent are important tasks for which supervised learning has been found to be very effective. In this paper, we consider the problem of exploiting multiple sources of training data with variant ontologies. We present a new transfer learning approach based on embedding multiple label sets in a shared space,...


Label Embedding Approach for Transfer Learning

Automatically tagging textual mentions with the concepts, types and entities that they represent are important tasks for which supervised learning has been found to be very effective. In this paper, we consider the problem of exploiting multiple sources of training data with variant ontologies. We present a new transfer learning approach based on embedding multiple label sets in a shared space,...


Transfer Learning for OCRopus Model Training on Early Printed Books

A method is presented that significantly reduces the character error rates for OCR text obtained from OCRopus models trained on early printed books when only small amounts of diplomatic transcriptions are available. This is achieved by building from already existing models during training instead of starting from scratch. To overcome the discrepancies between the set of characters of the pretra...


Training Principle-Based Self-Explanations: Transfer to New Learning Contents

The present study tested the transfer effects of a short training intervention on principle-based self-explanations. The intervention used fables as well as mathematics examples and problems as "exemplifying" domains for training such self-explanations. The effects were tested in a new learning environment about attribution theory and feedback messages. In this experiment, 58 German high-school...



Journal

Journal title: Journal of Physics

Year: 2023

ISSN: 0022-3700, 1747-3721, 0368-3508, 1747-3713

DOI: https://doi.org/10.1088/1742-6596/2522/1/012008